Search for: All records

Creators/Authors contains: "Rekleitis, Ioannis"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. This paper explores the problem of deploying machine learning (ML)-based object detection and segmentation models on edge platforms to enable real-time caveline detection for Autonomous Underwater Vehicles (AUVs) used in underwater cave exploration and mapping. We specifically investigate three ML models, namely U-Net, Vision Transformer (ViT), and YOLOv8, deployed on three edge platforms: Raspberry Pi-4, Intel Neural Compute Stick 2 (NCS2), and NVIDIA Jetson Nano. The experimental results reveal clear trade-offs between model accuracy, processing speed, and energy consumption. The most accurate model is U-Net, with an F1-score of 85.53 and an Intersection over Union (IoU) of 85.38, while the highest inference speed and lowest energy consumption are achieved by the YOLOv8 model deployed on the Jetson Nano operating in the high-power and low-power modes, respectively. The comprehensive quantitative analyses and comparative results provided in the paper highlight important nuances that can guide the deployment of caveline detection systems on underwater robots, ensuring safe and reliable AUV navigation during underwater cave exploration and mapping missions. (A toy sketch of the two evaluation metrics follows the list.)
    Free, publicly-accessible full text available December 15, 2024
  2. Vision-based state estimation is challenging in underwater environments due to color attenuation, low visibility, and floating particulates. All visual-inertial estimators are prone to failure when image quality degrades, yet underwater robots are required to keep track of their pose throughout field deployments. We propose a robust estimator that fuses the robot's dynamic and kinematic model with proprioceptive sensors to propagate the pose whenever visual-inertial odometry (VIO) fails. VIO failures are detected by health tracking, which enables switching between pose estimates from VIO and from a kinematic estimator. Loop closure is implemented on a weighted pose graph for global trajectory optimization. Experimental results from Aqua2 Autonomous Underwater Vehicle field deployments demonstrate the robustness of our approach in different underwater environments, such as over shipwrecks and coral reefs. The proposed hybrid approach is robust to VIO failures, producing consistent trajectories even in harsh conditions. (A minimal illustration of this kind of estimator switching follows the list.)
    Free, publicly-accessible full text available September 25, 2024
  3. Underwater caves are challenging environments that are crucial for water resource management and for our understanding of hydrogeology and history. Mapping underwater caves is a time-consuming, labor-intensive, and hazardous operation. For autonomous cave mapping by underwater robots, the major challenge lies in vision-based estimation in the complete absence of ambient light, which results in constantly moving shadows due to the motion of the camera-light setup. Detecting and following the caveline as navigation guidance is therefore paramount for robots in autonomous cave mapping missions. In this paper, we present a computationally light caveline detection model based on a novel Vision Transformer (ViT)-based learning pipeline. We address the problem of scarce annotated training data with a weakly supervised formulation in which learning is reinforced through a series of noisy predictions from intermediate sub-optimal models. We validate the utility and effectiveness of such weak supervision for caveline detection and tracking at three different cave locations in the USA, Mexico, and Spain. Experimental results demonstrate that our proposed model, CL-ViT, balances the robustness-efficiency trade-off, ensuring good generalization performance while offering 10+ FPS on single-board (Jetson TX2) devices. (A generic self-training sketch of the weak-supervision idea follows the list.)
    Free, publicly-accessible full text available October 1, 2024
  4. IEEE (Ed.)
    This paper addresses the robustness problem of visual-inertial state estimation for underwater operations. Underwater robots operating in a challenging environment are required to know their pose at all times. All vision-based localization schemes are prone to failure due to poor visibility conditions, color loss, and lack of features. The proposed approach utilizes a model of the robot's kinematics together with proprioceptive sensors to maintain the pose estimate during visual-inertial odometry (VIO) failures. Furthermore, the trajectories from successful VIO and those from the model-driven odometry are integrated into a coherent set that maintains a consistent pose at all times. Health monitoring tracks the VIO process, ensuring timely switches between the two estimators. Finally, loop closure is implemented on the overall trajectory. The resulting framework is a robust estimator that switches between model-based and visual-inertial odometry (SM/VIO). Experimental results from numerous deployments of the Aqua2 vehicle demonstrate the robustness of our approach over coral reefs and a shipwreck. (A small segment-stitching illustration follows the list.)
    Free, publicly-accessible full text available May 29, 2024
  5. In this paper, we present a system for measuring water quality, with a focus on detecting and predicting Harmful Cyanobacterial Blooms (HCBs). The proposed approach combines stationary multi-sensor stations, Autonomous Surface Vehicles (ASVs) collecting water quality data, and manual deployments of vertical water sampling together with vertical water quality sensor profiles, in order to monitor the health of the lake and the progress of different types of algal blooms. Traditional water monitoring is performed by manual sampling, which is limited in both the spatial and the temporal domain. The proposed method will expand the range of measurements while reducing the cost. Human sampling is still included to provide a basis of comparison and ground truth for the automated measurements. In addition, the data collected over multiple years will be analyzed to infer correlations between the different measured parameters and the presence of blooms. A detailed description of the proposed system is presented together with data collected during our first sampling season. (A short correlation-analysis sketch follows the list.)
  6. In this paper, we present a complete framework for underwater SLAM utilizing a single inexpensive sensor. In recent years, the imaging technology of action cameras has been producing stunning results even under the challenging conditions of the underwater domain. The GoPro 9 camera provides high-definition video synchronized with an Inertial Measurement Unit (IMU) data stream, encoded in a single mp4 file. The visual-inertial SLAM framework is augmented to adjust the map after each loop closure. Data collected at an artificial wreck off the coast of South Carolina and in caverns and caves in Florida demonstrate the robustness of the proposed approach in a variety of conditions. (A small frame-to-IMU synchronization sketch follows the list.)
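Record 1 scores segmentation output with F1 and Intersection over Union (IoU). Purely as a rough illustration, and not the paper's evaluation code, the sketch below computes both metrics for a pair of binary masks; the toy 4x4 arrays stand in for a predicted caveline mask and its ground-truth label.

    import numpy as np

    def iou_and_f1(pred, gt, eps=1e-9):
        # Pixel-wise IoU and F1 for binary masks (1 = caveline, 0 = background).
        pred = pred.astype(bool)
        gt = gt.astype(bool)
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        iou = tp / (tp + fp + fn + eps)
        f1 = 2 * tp / (2 * tp + fp + fn + eps)
        return iou, f1

    # Toy 4x4 masks; real masks would come from the model output and the annotations.
    pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
    gt   = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
    print(iou_and_f1(pred, gt))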
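Record 2 switches from VIO to a kinematics-driven estimate when a health check flags a failure. The sketch below is a minimal, hypothetical version of that switching logic: the feature-count threshold, the planar dead-reckoning model, and the synthetic "steps" stream are invented for illustration and are much simpler than the estimator described in the abstract.

    import numpy as np

    def vio_healthy(tracked_features, min_features=20):
        # Crude health check: trust VIO only while enough features are tracked.
        return tracked_features >= min_features

    def propagate_kinematic(pose, v, omega, dt):
        # Dead-reckon a planar pose (x, y, yaw) from surge speed v and yaw rate omega.
        x, y, yaw = pose
        return np.array([x + v * np.cos(yaw) * dt,
                         y + v * np.sin(yaw) * dt,
                         yaw + omega * dt])

    # Synthetic stream: VIO drops out on the third step (feature count collapses).
    steps = [
        {"features": 80, "vio_pose": np.array([0.5, 0.0, 0.00]), "v": 0.5, "omega": 0.00, "dt": 1.0},
        {"features": 60, "vio_pose": np.array([1.0, 0.0, 0.05]), "v": 0.5, "omega": 0.05, "dt": 1.0},
        {"features": 5,  "vio_pose": None,                       "v": 0.5, "omega": 0.05, "dt": 1.0},
    ]

    pose = np.zeros(3)
    for step in steps:
        if vio_healthy(step["features"]):
            pose = step["vio_pose"]          # use the visual-inertial estimate
        else:
            pose = propagate_kinematic(pose, step["v"], step["omega"], step["dt"])
    print(np.round(pose, 3))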
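Record 3 reinforces learning through noisy predictions from intermediate sub-optimal models. The sketch below shows the generic self-training (pseudo-labeling) pattern that phrase points to, using synthetic scikit-learn data and a logistic-regression stand-in; it is not the CL-ViT pipeline, and the 50-sample labeled seed and 0.95 confidence threshold are arbitrary choices.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for frames: only a small subset carries annotations.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    labeled = np.zeros(len(y), dtype=bool)
    labeled[:50] = True

    clf = LogisticRegression(max_iter=1000)
    for _ in range(5):
        clf.fit(X[labeled], y[labeled])
        proba = clf.predict_proba(X[~labeled])
        confident = proba.max(axis=1) > 0.95       # keep only confident (still noisy) predictions
        idx = np.where(~labeled)[0][confident]
        y[idx] = proba.argmax(axis=1)[confident]   # pseudo-labels from the intermediate model
        labeled[idx] = True                        # fold them into the next training round
    print("training-set size after self-training:", int(labeled.sum()))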
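Record 4 integrates trajectories from successful VIO and from model-driven odometry into one coherent set. As a loose illustration of one piece of that idea, the sketch below chains planar (SE(2)) trajectory segments so that each new segment starts at the last pose of the previous one; the weighted pose graph and loop closure that the paper adds on top are not shown, and the segment values are made up.

    import numpy as np

    def se2(x, y, yaw):
        # Homogeneous 3x3 matrix for a planar pose.
        c, s = np.cos(yaw), np.sin(yaw)
        return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

    def stitch(segments):
        # Chain trajectory segments (each expressed in its own start frame) so the
        # segment produced by one estimator begins where the previous one ended.
        world = [np.eye(3)]
        for seg in segments:
            anchor = world[-1] @ np.linalg.inv(seg[0])
            world.extend(anchor @ T for T in seg[1:])
        return world

    # Toy segments: a VIO stretch followed by model-based odometry after a dropout.
    vio_segment   = [se2(0, 0, 0), se2(1.0, 0.1, 0.05), se2(2.0, 0.3, 0.10)]
    model_segment = [se2(0, 0, 0), se2(0.8, 0.0, 0.00)]
    trajectory = stitch([vio_segment, model_segment])
    print(np.round(trajectory[-1], 2))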
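Record 5 plans to infer correlations between measured parameters and the presence of blooms. The sketch below shows the plain Pearson-correlation computation that implies, on synthetic numbers; the parameter names (temperature, nitrate, and phycocyanin as a bloom proxy) are hypothetical placeholders rather than the project's actual variables.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200                                   # pretend: 200 station readings over a season
    temperature = rng.normal(26.0, 3.0, n)    # degrees C (hypothetical parameter)
    nitrate     = rng.normal(1.0, 0.3, n)     # mg/L (hypothetical parameter)
    phycocyanin = 0.5 * temperature + 2.0 * nitrate + rng.normal(0.0, 1.0, n)  # bloom proxy

    corr = np.corrcoef(np.vstack([temperature, nitrate, phycocyanin]))
    print(np.round(corr, 2))                  # pairwise Pearson correlations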
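Record 6 relies on the GoPro 9 recording video in synchronization with an IMU stream inside one mp4 file. Assuming a shared clock and made-up rates (roughly 200 Hz IMU and 30 fps video), the sketch below shows one common way a visual-inertial front end uses such synchronization: grouping the IMU samples that fall between consecutive frames. Extracting the actual telemetry from the mp4 is outside the scope of this sketch.

    import numpy as np

    # Hypothetical shared-clock timestamps: ~200 Hz IMU stream and ~30 fps video.
    imu_t   = np.arange(0.0, 10.0, 1.0 / 200.0)
    gyro_z  = np.sin(imu_t)                     # stand-in for one gyro axis
    frame_t = np.arange(0.0, 10.0, 1.0 / 30.0)

    # Bucket the IMU samples between consecutive frames, the way a VIO front end
    # groups inertial measurements for pre-integration between images.
    edges = np.searchsorted(imu_t, frame_t)
    for k in range(len(frame_t) - 1):
        window = gyro_z[edges[k]:edges[k + 1]]
        # ... `window` would feed the inertial term between frame k and frame k+1
    print("IMU samples between the first two frames:", edges[1] - edges[0])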